.SS(Review and Reflection,,P85:)
.BEGIN ep;
"I think that it is important to maintain a view of the field that helps minimize
the distance between theoretical and practical work."
.END
.pt24;
.BEGIN TURN ON"→";
→Saul Amarel, ⊗↑[Ama#72]↑########
.END
.PT24;PT24;PT2
.FP
By way of review we
sketch the basic LISP evaluator of {yonss(P6)}:
%3eval%* plus the additional artifacts for %3label%1 and %3function%1.

There are two arguments to %3eval%*: a %2form%*⊗↓Throughout this section we
will say "form", "variable", "λ-expression", etc.  rather than "a
representation of a" ... "form", "variable", "λ-expression", etc. No
confusion should result, but remember that we %6are%* speaking imprecisely.←, that
is, an expression which can be evaluated; and  an association list or
%2symbol table%*. If the form is a constant, return that
form. If the form is a variable, find the value of the variable in the current
environment.
If the form is a conditional expression, then evaluate it
according to the semantics of conditional expressions.

The form might also be a functional argument. In this case evaluation consists of
associating the current environment 
with the function and returning that construct as value; 
in LISP this is done with the %3funarg%* device.
Any other form seen by %3eval%* is assumed to be an
application, that is, a  function followed by arguments.
The arguments are evaluated from left-to-right and the function is then applied
to these arguments. 
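.BEGIN GROUP;TABIT2(31,44);
.FP
In outline, and suppressing the %3funarg%1 and %3label%1 cases, such an evaluator
might be sketched as follows, where %3apply%1 is described below and the
recognizers, selectors, and helper functions %3lookup, evcond,%1 and %3evlis%1
are illustrative names only:
.SELECT 3;
.BOXA
eval <= λ[[form;environ]\[isconst[form] → form;
\ isvar[form] → lookup[form;environ];
\ iscond[form] → evcond[args[form];environ];
\ %et%3 → apply[\func[form];
\\evlis[arglist[form];environ];
\\environ]]]
.BOXB
.END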

The part of the evaluator which handles function application is called %3apply%*.
%3apply%* takes three arguments: a LISP function, a list of evaluated arguments, and
a symbol table. If the function is one of the five LISP primitives
then the appropriate action is carried out. If the function is a λ-expression
then bind the formal parameters (the λ-variables) to the evaluated arguments and
evaluate the body of the function. The function might also be the result of a
functional argument binding; in this case apply the function to the arguments
and use 
the saved environment, rather than  the given environment, to search for
non-local bindings.
If we are applying  the %3label%1 operator, 
recalling {yon(P192)}, we  build a %3funarg%1-triple 
and new environment such that the environment bound in the triple is the
new environment. We then apply the function to the arguments in this new 
environment.

.P227:
If the function has a name 
we look up that name in the current environment.
Currently we expect that value to be a λ-expression, which is then applied. 
However, since function names are just variables, there is no reason that the
value of a function name could not be a simple value, say an atom, and 
%6that%* value can be
applied to the arguments.
 Examination of %3apply%* on {yon(P69)}
will show that %3apply[X;#((A#B))#;#((X#.#CAR)#...#)]%* %6will%* be handled correctly.
The natural extension of this idea is to allow a %2⊗→computed function↔←%*. That
is, if the function passed to %3apply%* is not recognized as one of the preceding
cases, then continue to evaluate the function-part until it is recognized. 
.GROUP;
We will allow such forms as:
.BEGIN CENTERIT;SELECT 3;
.BOXA
←((CAR#(QUOTE#(CAR#(A#.#B))))#(QUOTE#(A#.#B)))
.BOXB
.END
.APART
.BEGIN GROUP;TABIT3(34,47,54);
.FP
The addition of computed functions modifies %3apply%1 ({yon(P69)}) slightly:
.SELECT 3;
.BOXA
apply <= λ[[fn;args;environ]\[iscar[fn] → car[arg%41%*[args]];
\ iscons[fn] → cons[arg%41%*[args];arg%42%*[args]];
\        ...                ... 
\ islambda[fn] → eval[\body[fn];
\\\pairlis[vars[fn];args;environ]];
\ %Et%3 → apply[\eval[fn;environ];
\\args;
\\environ] ]]
.BOXB
.END
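.GROUP;
.FP
For example, the final case of this %3apply%1 handles the form displayed above:
the function-part %3(CAR#(QUOTE#(CAR#(A#.#B))))%1 is not recognized, so it is
itself evaluated, yielding %3CAR%1, and %3apply%1 is re-entered. Writing %3env%1
for the current environment:
.BEGIN CENTERIT;SELECT 3;
.BOXA
←apply[(CAR (QUOTE (CAR (A . B)))); ((A . B)); env]
←= apply[CAR; ((A . B)); env] = A
.BOXB
.END
.APART;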
.FP
The subset of LISP which is  captured by this evaluator is 
 a strong and useful language even though it lacks several of the more
common programming language features⊗↓It is "strong", both practically
and theoretically. It is sufficiently powerful
to be "universal" in the sense that all computable functions
are representable in LISP. In fact the %3eval-apply%1 pair
represents a "universal function", capable of simulating
the behavior of any other computable function. The
usual arguments about the halting problem#(⊗↑[Rog#67]↑ and {yon(P288)}) 
and related matters
are easily expressed and proved in this LISP. Indeed, the
original motivation for introducing the M-to-S expression
mapping was to express the language constructs in the
data domain so that universality could be demonstrated. 
"S-expression#LISP" was used as a programming
language only as an afterthought. 
It was S.#Russell who convinced McCarthy that the
theoretical device represented in %3eval%1 and %3apply%1
could in fact be programmed on the IBM704.←. This subset
is called the %2applicative subset of LISP%1, since its
computational ability is based on the mathematical idea of
function application. We have persistently referred to our LISP
procedures as LISP functions, even though we have seen some
differences between the concept of function in mathematics and the concept
of procedure  in LISP. It is the mathematical idea of function which the
procedures of an applicative language are trying to capture.
Regardless of differences in syntax and evaluation
schemes, the dominant characteristic of an applicative language  is 
that a given "function" applied to a given set of arguments 
%6always%1 produces the same result: either the computation
produces an error, or it doesn't terminate, or it  produces 
a specific value.
The applicative subset of LISP uses dynamic binding and therefore
the occurrence of free variables calls the environment into play.
But still, we have no way 
to destructively change the environment. Constructs which do have such an
ability are said to have %2side-effects%1 and are discussed in the
next chapter.


LISP was the first language to exploit procedures as objects of the language.
The idea has been generalized substantially in the intervening
years. A concise statement of the more general principle appears in ⊗↑[Pop#68a]↑.

.BEGIN GROUP;INDENT 10,10,10
.P284:
"...This brings us to the subject of items.
Anything which can be the value of a variable is an item. %6All items
have certain fundamental rights.%1
.BEGIN INDENT 13,13;
.NOFILL;

1. All items can be the actual parameters of functions 
2. All items can be returned as results of functions
3. All items can be the subject of assignment statements⊗↓This issue will be postponed
until {yonsec(P274)}.←
4. All items can be tested for equality.  
.END
 ..."

.END
.FP
LISP  performs well on the first two principles, allowing LISP functions
to be the arguments as well as the results of functions. The fourth
principle is more difficult: testing for the
equality of two LISP functions.  It can be shown (⊗↑[Rog#67]↑) that
no algorithm can be constructed which will perform such a test for arbitrary functions.
However in more selective settings, program equality can be established,
both theoretically (⊗↑[Man#74]↑), and practically (⊗↑[Boy#75]↑, ⊗↑[Dar#73]↑,
⊗↑[Car#76]↑).

.GROUP;
The language  POP-2#(⊗↑[Pop#68]↑)  has also generalized the notion of function application
in a slight, but significant, way.  The generalization is called
%2⊗→partial application↔←%1. Given a function 
.BEGIN CENTER;SELECT 3;
.BOXA
f <= λ[[x%41%3; ... ;x%4n%3] %9x%3]%1,
.BOXB
.END
.FP
we compute a new function of %3n-m%1 arguments by applying %3f%1
to %3m%1 arguments and obtain a function of the remaining arguments
%3x%4m+1%1 through %3x%4n%1:
.BEGIN CENTER;SELECT 3;
.BOXA
 λ[[x%4m+1%3; ... ;x%4n%3] %9x'%3] = f[t%41%3; ...; t%4m%3] 
.BOXA
.END
.FP
where %9x'%1 results from %9x%1 by binding %3x%41%1 through %3x%4m%1 to
%3t%41%1 through %3t%4m%1 respectively⊗↓POP-2 actually binds the %6last%1 %3m%1
arguments.←.
Further generalizations of partial application are imaginable (⊗↑[Sta#74]↑).
.APART;
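.GROUP;
.FP
In LISP terms, a partial application can be simulated with a functional value;
for example, given %3f#<=#λ[[x;y]#cons[x;y]]%1, the partial application %3f[A]%1
corresponds to:
.BEGIN CENTERIT;SELECT 3;
.BOXA
←λ[[y] cons[A;y]]%1,###so that###%3f[A][B]%1 yields %3(A . B)%1.
.BOXB
.END
.APART;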

We have pointed out several "procedural" differences. Our treatment
of conditional expressions differs from the usual treatment of
function application: our standard rule for function application
is "call#by#value" which requires the evaluation of all arguments
before calling the LISP function;  only some of the
arguments to a conditional expression are evaluated. Note that the whole
question of "evaluation of arguments" is a procedural one; arguments
to functions aren't "evaluated" or "not evaluated", they just "are".

We have seen different algorithms computing the same function; for
example %3fact%1 ({yon(P21)} and 
{yon(P99)}) and %3fact%41%1#({yon(P21)}) all compute the factorial
function. If there are different algorithms for computing
factorial, then there are different algorithms for computing
the value of an expression, and %3eval%1 is just one such algorithm.
Just as the essence of %3fact%1 and %3fact%41%1 is the factorial function,
we should capture the essence expressed in %3eval%1.
Put another way, when  applications programmers use %3sqrt%1 or %7p%1
they have a specific mathematical function or constant in mind.
The implementation of the language supplies approximations to these
mathematical  entities, and assuming the computations stay within the
numerical ranges of the machine, the programmers are free to
interpret  any output as that which the mathematical entities would
produce. More importantly the programmers have placed specific interpretations
or meanings on symbols. We are interested in doing the same thing only
we wish to produce a %6freer%1 interpretation, which only mirrors
the essential ingredients of the language constructs. That is,
%3sqrt%1 represents a %6function%1 and %7p%1 represents a %6constant%1.
The %3eval-apply%1 pair gives one interpretation to the meaning of functions
and constants, but there are different interpretations and there are
different %3eval-apply%1 pairs.
The remainder of this section  will resolve some of the tension between
function and procedure.


What does this discussion have to do with 
programming languages? Clearly the order of
evaluation or reduction is directly  applicable. 
Our study will also  give insights 
into the problem of language specification.
Do we say that
the language  specification consists of a syntactic component and some
 description of the evaluation of constructs in that language?
Or do we say that these two components, syntax and a machine, are merely
devices for describing  or formalizing notions about  some abstract  domain of
discourse?  A related question for programmers and language designers
involves the ideas of correctness and equivalence of programs. How do we
know when a program is correct?  This requires some notion of a standard
against which to  test our implementations⊗↓"Unless
there is a prior mathematical definition of a language at hand,
who is to say whether a proposed implementation is %6correct%1?"#⊗↑[Sco#72]↑.←. 
If our algorithms really are  reflections of
 functional properties, then we should develop methods
for verifying that those properties are expressed in our algorithms. Against this
we  must balance the realization that many programs don't fit neatly
into this mathematical framework. Many programs are best characterized
as themselves. In this case we should then be interested in 
verifying equivalence of programs. If we develop a new algorithm
we have some responsibility to demonstrate that the algorithms are equivalent
in very well defined ways⊗↓Current theory is inadequate for dealing with
 many real programming tasks. However the realization that
one has a responsibility to consider the questions, %6even informally%1,
is a sobering one  which more programmers should experience.←.

The study of formal systems in mathematical logic offers insight.
There, we are presented with a syntax and a system of axioms and rules of inference.
Most usually we are also offered a "model theory" which gives us 
interpretations 
 for the objects of the formal system; the model theory  supplies
additional means of giving convincing arguments for the validity of statements
in the formal system.  The arguments made within the formal system
are couched in terms of "provability" whereas arguments of the model theory are
given in terms of "truth". 
For a discussion of formal systems and model theory see ⊗↑[Men#64]↑.

.<<c w mmorris>>
.GROUP;
C. W. Morris (⊗↑[Mor#55]↑) isolated three perspectives on language: syntax, pragmatics,
and semantics. 

.BEGIN INDENT 0,8;GROUP;

%2Syntax%*: The synthesis and analysis of sentences in  a language. This area is
well cultivated in programming language  specification.
.END
.APART;
.BEGIN INDENT 0,11;GROUP;

%2Pragmatics%*: The relation between the language and its user. 
Evaluators, like %3tgmoaf, tgmoafr%1 and %3eval%1, come under 
the heading of pragmatics.
Pragmatics is more commonly referred to as operational semantics in discussions
of programming languages.
.END
.BEGIN INDENT 0,10;GROUP;

%2Semantics%*: The relation between constructs of the language and the abstract
objects which they denote. This subdivision is commonly referred to as denotational
semantics.

.END
.FP
Put differently, syntax describes appearance, pragmatics describes
implementation,  and
semantics describes meaning⊗↓This  division of language
reflects an  interesting parallel
between mathematical logic and programming languages: in mathematical logic
we have deduction, computation, and truth; in programming language specification
we have three analogous schools 
of thought: axiomatic, operational, and denotational.
We won't go into details here, but
see ⊗↑[Men#64]↑   for the mathematical logic and ⊗↑[Dav#76]↑ 
for a study of the interrelationships;
see ⊗↑[Hoa#69]↑
for a discussion of the axiomatic school;  we will 
say more about the operational and denotational schools in this
section.←.
Though there is a strong consensus on the syntactic tools for specifying
languages⊗↓But see ⊗↑[Pra#73]↑ for a contrary position.←, there is no consensus on
adequate pragmatics and even less agreement on the adequacy of
semantic descriptions.
We will first outline the pragmatics questions and then discuss a bit 
more of the semantics issues.
In this discussion we will use the language distinctions of Morris
even though the practice is not commonly followed in the literature.
Typically, syntax is studied  precisely and semantics is "everything else";
however the distinction between computation (pragmatics)  and truth (semantics)
is important and should not be muddled.


One thought is to describe the pragmatics of a language in terms of the process
of compilation.  That is, the pragmatics is specified by some canonical compiler
producing code for some well-defined simple machine.  The meaning of
a program is the outcome of an interpreter interpreting this 
code.
But then,
to understand the meaning of a particular construct,  this proposal 
requires that you understand the description 
of a compiler 
and understand the simple machine.
Two problems arise immediately. First, compilers are not particularly
transparent programs. Second, a very simple machine may not adequately
reflect the actual mechanisms used in an implementation. This aspect is
important if the  description is to be meaningful to an implementor.

A more fundamental difficulty is apparent when  we consider the practical aspects
of this proposal.
When asked to understand  a program written in  a high-level language you think  
about the %6behavior%1 of that program in a very direct way. The pragmatics
is close to the semantics.
You think about the computational behavior as it executes; you
do not think about the code that gets generated and then think about the
execution of that code.  

A more natural pragmatics for the
constructs is given in terms of the execution of these constructs;
 thus %3eval%* is the pragmatic
description of LISP. 
The %3eval%* function describes the execution sequence of a
representation of an arbitrary LISP expression. 
That  description  has a flavor of circularity which
some find displeasing. However some circularity 
in description is inevitable; we must assume that 
something is known and does not require further explication.
If language L%41%*  is described in  terms of a
simpler language L%42%* then either L%42%* is "self#evident" or 
a description of
L%42%* must be given.


So, realistically, the choice is where to stop, not whether to stop.
The LISP position is that the language and data structures are sufficiently
simple that self-description is satisfactory.
Attempts have been made to give non-circular interpreter-based
descriptions of semantics for languages other than LISP.  There is a
 Vienna Definition Language (⊗↑[Weg#72]↑) description of PL/1, 
and a description  of ALGOL 68 (⊗↑[Alg#75]↑)
by a Markov algorithm. Both these attempts result in a description
which is long and unmanageable for all but the most persistent reader.

There are some compelling  reasons for deciding on direct circularity.  First,
you need only learn one language.  If the specification is given in
the source language, then you learn the programming language and the
specification language at the same time.  Second, since the evaluator is
written in the language, we can understand the language by understanding
the workings of the single program, %3eval%*; and if we wish to modify the
pragmatics, we need change only one collection of high-level programs.  
If we
wish to add new language constructs to LISP, we
need only modify %3eval%* so that it recognizes an occurrence of that new
construct,  and add a  function to describe the
interpretation of the construct.
That modification might be extensive,
 but  it will
be a controlled revision  rather than a complete reprogramming effort.

There is another common method for specifying 
the pragmatics of a programming language. The Algol report (⊗↑[Alg#63]↑)
introduced a standard   for syntax specification: the
BNF equation. It also  gave a reasonably precise description of the
pragmatics of Algol statements in natural language.
The style
of presentation was concise and clear, but suffers from the imprecision
of natural language.
Regardless,
this style  of description is quite common and is very useful. A recent
report#(⊗↑[Moor#75a]↑) on the pragmatics of InterLISP  used this descriptive style.
If the language is quite complex, then a formal description can be
equally complex. In {yonss(p211)} we will see that our interpreter definition
will extend nicely to richer subsets of LISP.


Regardless of the descriptive method used,
a language description should not attempt to explain everything about a language.
It need only explain what needs explaining.
You must assume that your reader understands something ... . McCarthy: %6`Nothing
can be explained to a stone'%*#⊗↑[McC#66]↑.
A  description of a language must be natural and must match the expressive
power of the language. That is, it  should  model how the constructs
are to be implemented.   What is needed is a description
 which exploits, rather than ignores, the structure of the language.
Mathematical notation is no substitute for clear thought,
but we believe careful formulations of semantics will lead to additional
insights in  the pragmatics of language 
design ⊗↓R. D. Tennent has invoked this approach in
the design of %3QUEST%*#(⊗↑[Ten#76]↑).←.  The task requires new mathematical tools
to model the language constructs, and requires increased care on the part of the
language designer.

Let's look at  the issue of syntax for a moment.  
A satisfactory description of much of
programming language syntax is standard BNF.  The BNF is a generative, or synthetic
grammar since the rewriting rules specify how to %6generate%1  well formed
strings.  An evaluator, on the other hand,
executes the  already existing program.  This implies that
our evaluator is %6analytic%1 
rather than synthetic; it must have some way of analyzing the structure of 
the given program.


.<<abstract syntax>>
.BEGIN GROUP;
.P75:
In ⊗↑[McC#62]↑, John McCarthy introduced the concept of abstract 
⊗→analytic syntax↔←. The idea is directly derivable from the LISP
experience. The syntax
is analytic, rather than synthetic, in the sense that it tells how 
to take programs apart#--#how to recognize
and select subparts of programs using predicates and selector functions⊗↓We will
deal with abstract %6synthetic%1 syntax when we discuss compilers.←.
It is abstract
in the sense that it makes no commitment to the external representation of the
constituents of the program.  We need only be able to recognize the occurrence
of a desired construct. For example:
.TURN ON "←";

.BEGIN GROUP;SELECT 3;TABIT1(21);
.BOXA
isterm <= λ[[t] or[\isvar[t];
\isconst[t];
\and[issum[t];isterm[addend[t]];isterm[augend[t]]]]]
.BOXB
.END
.FP
%1and the BNF equation:
.BOXA
←<term> ::= <var> | <const> | <term> + <term>.
.BOXB
.FP
%3issum, addend,%1 and %3augend%1 don't really care whether the sum is 
represented as 
x+y, or %3+[x;y]%1 or %3(PLUS#X#Y)%* or %2xy+%1.  
The different external representations
are reflections of different ⊗→concrete syntax↔←es; the  BNF equation 
above gives one.
Parsing  links a concrete syntax with the abstract syntax.
.END
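.GROUP;
.FP
For instance, if sums are represented as %3(PLUS#X#Y)%1, one concrete realization
of this analytic syntax might be (with %3second%1 and %3third%1 selecting the
obvious components of the representation):
.BEGIN CENTERIT;SELECT 3;
.BOXA
←issum <= λ[[t] and[not[atom[t]];eq[first[t];PLUS]]]
←addend <= λ[[t] second[t]]
←augend <= λ[[t] third[t]]
.BOXB
.END
.APART;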

Since  the evaluator must 
operate on %6some%1 internal representation of the source program, 
a representation reflecting the structure of the program 
is most natural. 
This representation is more commonly known
as a parse tree. 
We can describe the evaluation of a program in terms of a function
which operates on a parse tree using the predicates and selectors of the
analytic syntax. Abstract syntax concerns itself only with those properties
of a program which are of interest to an evaluator.

.GROUP;
The Meta expression  LISP  constitutes a concrete syntax.
The M-to-S-expression translator is the parser which maps
the external representation onto a parse (or computational) tree. 
The selectors
and predicates of the analytic syntax are straightforward. Recall the BNF for
LISP:

.BEGIN TABIT1(11);GROUP;
.BOXA
<form>\::= <constant>
\::= <variable>
\::= <function>[<arg>; ... ;<arg>]
\::= [<form> → <form>; ... ;<form> → <form>]
\  ....
.BOXB
.END
.APART
.FP
We need to  write a parser which takes instances of these equations onto
an internal representation. 
Much is known about parsing techniques (⊗↑[Aho#72]↑,
also see  {yonss(P105)} and {yonss(P286)})
so we will concentrate on the post-parse processing.

Our evaluator will operate on the parse tree and will therefore need
 to recognize representations of constants,
variables, conditional expressions, and applications.
We  need to write a function in some language expressing the values
of these constructs. Since the proposed evaluator is to manipulate parse
trees, and since the domain of LISP functions %6is%* binary trees, it should seem
natural to use LISP itself.  If this is the case, then we must represent the parse
tree as a LISP S-expr and represent the selectors and  recognizers as LISP functions and
predicates.

.BEGIN SELECT 3;TABIT1(21);GROUP;
.FP
%1Perhaps:%3
.BOXA
isconst <= λ[[x] or[\numberp[x];
\eq[x;NIL];
\eq[x;T];
\and[not[atom[x]];eq[first[x];QUOTE]]]]
.PT18
isvar <= λ[[x] and[atom[x]; not[isconst[x]]]]
.PT18
iscond <= λ[[x] eq[first[x];COND]]
.BOXB
.END
.FP
There are really two issues here: a representation of the analytic syntax of a language,
and a representation in terms of the language %6itself%1.  
In conjunction, these two
ideas give a natural and very powerful means of specifying languages.
If this style of specification is really effective,
then we should be able to 
specify other languages  in a similar fashion.
One of the weak points of LISP as a
 programming language is the insistence on binary tree representations
of data⊗↓Many `production' versions of LISP have array, string, and even record
structures available for use. However the programmer must
explicitly  request  and manipulate such storage structures. We would rather
develop 
techniques in which  the storage structures are %6implied%1 either by
the types of operations desired, or by  the specification of the abstract
data structure, or by interaction between the programming system and the
user.←.  Many applications could profit from other
data representations.  What we would then like is a language which has richer
data structures than LISP but which is designed to allow LISP-style semantic specification.
We would be able to write an evaluator, albeit more complex than %3eval%*, in the language
itself.  The evaluator would operate on a representation of the program as a 
data structure of the language, just as %3eval%* operates on the S-expr translation
of the LISP program.  The concrete syntax would be specified as a set of BNF equations,
and our parser would translate  strings  into parse trees.

.BEGIN TABIT1(30);GROUP;
In outline, to  specify a construct  we must have at least the following:
.BOXA
.NL
%21.%*##A concrete production.
.NL
%22.%*##An abstract data type.
.NL
%23%*.##A mapping from %21%* to %22%*.
.NL
%24.%*##An evaluator for the abstract type.
.BOXB
.END
.FP
In {yonsec(P256)} we will sketch a recent LISP-like language, EL1, which
%6does%1 apply  these principles⊗↓Compare  steps  %21%1 through %24%1 with those on 
{yon(P116)}.←.
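.GROUP;
.FP
For example, the sum construct of {yon(P75)} fits this outline: the concrete
production <term>#::=#<term>#+#<term>; the abstract type given by %3issum,
addend,%1 and %3augend%1; a parser clause mapping %3x+y%1 onto, say, %3(PLUS#X#Y)%1;
and an evaluator clause such as (with %3val%1 an assumed evaluator for terms):
.BEGIN CENTERIT;SELECT 3;
.BOXA
←issum[t] → plus[val[addend[t]];val[augend[t]]]
.BOXB
.END
.APART;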

From a discussion of syntax we have gravitated back to evaluation.
After we have reduced the questions of syntax of programming languages to questions
of abstract syntax and stripped away the syntactic differences,
how many  %6real%1 differences between languages are there?
Semantics addresses this issue.



.<<lisp borrows>>
Many of the semantic ideas in applicative programming
languages can be captured in %9λ%1-calculus (⊗↑[Chu#41]↑). 
The %9λ%1-calculus was developed to supply a formalism 
for discussing the notions of function and function application. 
Rather than develop the syntax of %9λ%1-calculus, we will
use LISP-like syntax and show how we can abstract from the procedural
 LISP function  to a  mathematical function.

Most of the description of LISP which we have given so far is classified as ⊗→operational↔←.
That means our informal description of LISP and our later description of the LISP
interpreter are couched in terms of "how does it work  or operate". Indeed
the purpose of %3eval%* was to describe explicitly what     happens
when a LISP expression is  evaluated.  We have seen ({yon(P84)}) that discussion
of evaluation schemes is non-trivial, and that order of evaluation can affect the
outcome ({yon(P17)}).  
.BEGIN TURN ON "#";
.P91:
However, many times the order of evaluation is immaterial
⊗↓"One difficulty with the operational approach is that it (frequently)
specifies too much": C. Wadsworth.←. 
We saw on {yon(P93)} that %3eval%* will perform without complaint when
given a form %3f[#...#]%* supplied with too many arguments.
How much of %3eval%* is "really" LISP and how much is "really" implementation?
On one hand we have said that the meaning of a LISP expression is
by#definition  what %3eval%* will 
calculate using the representation of the expression. On the other hand,
we claim  that %3eval%* is simply %6an implementation%*.
There are certainly other implementations; i.e., other LISP functions %3eval%4i%1
which have the same input-output behavior as our %3eval%*. 
The position here is not that %3eval%* is wrong in giving
values to things like %3cons[A;B;C]%*, but rather that we want to know what it is that
%3eval%* implements.

This other way of looking at meaning in programming languages is exemplified by
⊗→denotational↔← or mathematical semantics.
.P96:
This perspective springs from the philosophical  or logical
device of distinguishing between a %6representation%* for an object, and the
object itself. The most familiar example is the numeral-number distinction.
Numerals are notations (syntax) for talking about %2numbers%* (semantics).
Thus the Arabic %2numerals%* %32, 02%*, the Roman numeral %dII%*,
and the binary notation %310%*, all %6denote%* or represent
the %2number%* two. In LISP, %3(A#B), (A#.#(B)), (A#,#B)%* and %3(A#.#(B#.#NIL))%*
all are notations for the same S-expr.
We want to say that a LISP form 
%3car[(A#.#B)]%1 %2denotes%* %3A%1, or that %3car[A]%1 is undefined, just
as we say in mathematics that 2+2 denotes 4 or 1/0 is undefined. 

Similarly, we will say that
the denotational counterpart of a LISP function  is  a 
 mathematical  function  defined over an appropriate abstract domain.
Before  proceeding, we introduce some conventions
to distinguish notation from %6de%*notation. 
.BEGIN GROUP;
We will continue to use italics:
.BEGIN CENTER;
.BOXA
%3A, B, ..., x, ..., car, ..., (A . B) %* 
.BOXB
.END
.FP
as notation in LISP expressions.
Gothic bold-face:
.BEGIN CENTER;
.BOXA
%dA, B, ..., x, ..., car, ..., (A . B)%* 
.BOXB
.END
.FP
will represent denotations.
.END
.FP
Thus %3A%* is notation for %dA%*;
%3car[cdr[(A#.#(B#.#C))]]%* denotes %dB%*; the mapping, %dcar%* is the denotation
of the LISP function %3car%*.

.<<abstraction to function>>

Several areas of LISP must be  subjected to an abstraction process.
In particular, the operations involved in the evaluation process must
be abstracted away. These involve  an abstraction from LISP's
call-by-value evaluation and its left-to-right
order of evaluation of arguments.
For example,
the operation of composition of LISP functions   is to denote 
mathematical composition;
 in LISP, %3car[cdr[(A#.#(B#.#C))]]%*  means apply the procedure %3cdr%* to the
argument %3(A#.#(B#.#C))%* getting %3(B#.#C)%*; apply the procedure 
%3car%* to %3(B#.#C)%*
getting %3B%*. Mathematically, we have a mapping  %dcar%fo%*cdr%1,
which is a composition of the %dcar%* and %dcdr%* mappings; the ordered
pair %d<(A#.#(B#.#C))#,#B>%* is in the graph of this composed mapping.
At this level of abstraction,
any LISP characterization of a function is equally good;
the "efficiency" question has been abstracted
away. Many important properties of real programs %6can%*  be 
discussed in this  mathematical context; 
in particular, questions of equivalence
and correctness of programs are more approachable.

As we remove the operational aspects, we discover  a few  critical properties of
functions which must be reconciled with LISP's procedures. We must
treat the ideas of binding of variables, and we must  handle the notion
of function application.

We know that there are at least two binding strategies available:
static binding and dynamic binding; we know 
that the choice of strategy can affect the
resultant computation. This computational difference must be studied.
To
illuminate the problem we take an example in  LISP.


.GROUP;
Consider:
.BEGIN  SELECT 3;tabit3(10,14,18);
.BOXA
\λ[[z]
\\λ[[u]
\\\λ[[z] u[B]][C]] 
\\  [λ[[x] cons[x;z]]] ]
\  [A]
.BOXB
.END
.APART;
.FP
The dynamic binding strategy  will bind %3z%1 to %3A%1;
then bind  %3u%1 to the functional argument,
%3λ[[x]#cons[x;z]]%1; next, %3z%1 is rebound to %3C%1, and finally
%3u[B]%1 is evaluated. That  involves binding %3x%1 to %3B%1 and evaluating
%3cons[x;z]%1. Since we are using dynamic binding, the %6latest%1 bindings 
of the variables are used. The latest bindings for %3x%1 and %3z%1 are
%3B%1 and %3C%1 respectively, and the final value is therefore %3(B#.#C)%1.

We can obtain  static binding  by replacing
%3λ[[x]#cons[x;z]]%1 by
%3function[λ[[x]#cons[x;z]]]%1. This has the effect of associating the
variable %3z%1 with the atom %3A%1.  As we know, the final result of the
computation will be %3(B#.#A)%1.

Before discussing binding strategies further, we must
 strengthen our understanding of  the ideas
underlying  function application. It is this notion which a binding
strategy is implementing.  This involves a more careful study of the 
notion of λ-notation as the representation of a function.
We shall restrict our discussion to unary λ-expressions, since
n-ary functions can be represented in terms of unary functions:
.BEGIN CENTER;
.BOXA
%9λ%D((x y) %9x%d) = %9λ%d((x) %9λ%d((y) %9x%d))%1
.BOXB
.END

What properties do we expect a function, denoted by a λ-expression, to
possess?  For example, we expect
that a systematic renaming of formal parameters should not 
affect the definition of a function.
.BEGIN CENTER;SELECT 3;
.BOXA
λ[[y] x]%1### should denote the same function as### %3λ[[w] x]%1.
.END
.FP
But
.BEGIN CENTER;SELECT 3;

λ[[x] λ[[y] x]][w]%1###is not the same as### %3 λ[[x] λ[[w] x]][w]%1
.BOXB
.END
.FP
Such anomalies show that  we need to be  careful in
defining   our substitution rules.  Closer  examination of the last example
shows that the result %3λ[[w]#w]%1  would occur if
the substitution changed a non-local name (%3x%1) into  a local name (%3w%1).
The expected result would have been obtained if we had recognized the
clash of names and replaced the formal parameter %3y%1 with a new name, say
%3u%1, and then performed the substitution, getting %3λ[[u]#w]%1 which  is the
same as %3λ[[y]#w]%1.
Before giving a substitution rule which accounts for such  changes of name
we will introduce some new terminology.

.GROUP;
First, the "same as" relation will occur frequently in the  discussion; we
should establish some properties of this notion.
We introduce %9≡%1 to denote "is the same as";  we could therefore say

.BEGIN CENTER;SELECT 3;
.BOXA
λ[[y] x]%1 %9≡%1  %3λ[[w] x]%1.
.BOXB
.END
.APART
.GROUP;
.FP
We expect %9≡%1 to obey the rules of equality, being a reflexive,
transitive, and symmetric relation. It should also be "substitutive" in the
following sense:
.BEGIN TABIT1(25);
.BOXA
\%1if %df %9≡%d g %1 then %df(a) %9≡%d g(a)
(%7s%1):%1\if %df %9≡%d g %1 then %9λ%d((x) f) %9≡%d %9λ%d((x) g)%1
\%1if %da %9≡%d b %1 then %df(a) %9≡%d f(b)
.BOXB
.END
.APART;



.GROUP;
Next, we need to  talk about the bindings of variables more carefully.
.P254:
A variable, %dx%*, is a %2⊗→free variable↔←%*⊗↓Compare
this definition of free with that in {yonss(P135)}.← in an expression, %9x%* if:

.BEGIN INDENT 0,10;GROUP;
.BOXA
%9x%* is the variable, %dx%*.
.PT2
%9x%* is an application, %df(A)%*, and %dx%* is free in %df%* or %dx%1 is free in %dA%*.
.PT2
%9x%* is a %9λ%*-expression, %9λ%d((y) M)%1, and %dx%* is free in %dM%* and %dx%* is
distinct from %dy%*.
.PT2;
.END
.APART
.FP
Thus %dw%1 is free in %9λ%d((x) w)%1.
.BOXB
.GROUP;
We can define a LISP predicate to test for free-ness:
.BEGIN SELECT 3;TABIT2(19,33);
.BOXA
isfree <= λ[[x;z]\[is_var[z] → eq[x;z];
\ is_app[z] → \[isfree[x;func[z]] → %et%3;
\\ %et%3 → isfree[x;arg%41%3[arglist[z]]]];
\ eq[λ_var[z];x] → %ef%3;
\ %et%3 → isfree[x;body[z]]]]
.BOXB
.END
.APART

A variable is a %2⊗→bound variable↔←%* in %9x%*
if it occurs in %9x%* and is not free in %9x%*.
For example, %dw%1 is bound in %9λ%d((w)#w)%1.

Using our new terminology,  we can say that a substitution of the actual parameter
for  free occurrences of the
formal parameter can be made provided no free variables in  the actual
parameter will become bound variables  as a result of the substitution.

.GROUP;
Here is an appropriate substitution rule:

.BEGIN SELECT 3;GROUP;TABs 21,28,38,40,44,49,51,55;nofill;turn on "\";
.<<BEGIN SELECT 3;GROUP;TABs 16,22,32,34,38,42,44,48;nofill;turn on "\";>>
.KRK
.BOXA
subst%9'%3 <= λ[[x;y;z]\[is_var[z] → [eq[y;z] → x; %et%3 → z];
\ is_app[z] → mk_app[\subst%9'%3[x;y;func[z]];
\\\\subst%9'%3[x;y;arg%41%3[arglist[z]]]];
\ eq[y;λ_var[z]] → z;
\ not[isfree[y;body[z]]] → z;
\ not[isfree[λ_var[z];x]] → mk_λ[\λ_var[z];
\\\\\\subst%9'%3[\x;
\\\\\\\\y;
\\\\\\\\body[z]]];
\ %et%3 → \λ[[u] mk_λ[\u;
\\\subst%9'%3[\x;
\\\\\y;
\\\\\subst%9'%3[\u;
\\\\\\\λ_var[z];
\\\\\\\body[z]]]]]
\\  [genvar[ ]] ]]
.BOXB
.END
.FP
where %3genvar%1 is to supply a new identifier for use as a variable name.
.APART;
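.GROUP;
.FP
For example, %3subst%9'%3[w;x;λ[[y] x]]%1 yields %3λ[[y] w]%1 directly, since %3y%1
is not free in %3w%1; but %3subst%9'%3[y;x;λ[[y] x]]%1 detects the clash of names
and yields %3λ[[g%41%3] y]%1, where %3g%41%1 is the fresh variable supplied
by %3genvar%1.
.APART;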


.GROUP;
The substitution rule, %3subst%9'%1, is used to express the ⊗→%7b%1-rule↔← of the 
%9λ%1-calculus:
.BEGIN CENTERIT;
.BOXA
(%7b%1):← %3app%1 %9≡%1 %3subst%9'%3[arg%41%3[arglist[app]];λ_var[func[app]];body[func[app]]]%1
.BOXB
.END
.FP
where %3app%1 is an anonymous λ-application.
.APART;
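.GROUP;
.FP
For example, the %7b%1-rule justifies:
.BEGIN CENTERIT;
.BOXA
←%3λ[[x] car[x]][(A . B)]%1 %9≡%1 %3subst%9'%3[(A . B);x;car[x]]%1 %9≡%1 %3car[(A . B)]%1
.BOXB
.END
.APART;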

.GROUP;
There is another basic rule of the %9λ%1-calculus called the ⊗→%7a%1-rule↔←.
The %7a%1-rule generalizes the  notion that %3λ[[y]#x]%1 denotes the same
function as %3λ[[w]#x]%1; that is, subject  to certain restrictions, we can change
the formal parameter. The %7a%1-rule says:

.BEGIN CENTERIT;
.BOXA
(%7a%1):←%3fun%1 %9≡%1 %3λ[[u] mk_λ[u;subst%9'%3[u;λ_var[fun];body[fun]]]][var]
←%1provided that %3not[isfree[var;body[fun]]]%1.
.BOXB
.END
.APART
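.GROUP;
.FP
For example, taking %3fun%1 to be %3λ[[y] x]%1 and %3var%1 to be %3w%1, the %7a%1-rule
gives %3λ[[y] x]%1 %9≡%1 %3λ[[w] x]%1. The proviso blocks the choice of %3x%1 for
%3var%1, since %3x%1 %6is%1 free in the body.
.APART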



.P261:
To summarize then, the %9λ%1-calculus is a formalism. The %7a%1 and %7b%1 rules
are transformation rules, and  %7s%1 expresses properties of the relation %9≡%1
as  rules of inference.
To complete the description, axioms which govern the
behavior of %9≡%1 as an equivalence relation must be given.
The %7a%1 and %7b%1-rules 
are called %2conversion rules%1; they express the essential behavior of functions and their
applications. 
The %7a%1 rule formalizes the notion that a function is unchanged
if we change its formal parameters. This 
property is related to %2⊗→referential transparency↔←%1.
A  language possesses
referential transparency if the value of an expression which contains subexpressions
depends only on the values of those subexpressions. 
 LISP is %6not%1 a referentially transparent language;
the value
of %3λ[[x]#cons[x;y]][A]%1 
depends on the environment surrounding this expression.
The difficulty again is the treatment of free variables: dynamic binding
violates referential transparency. The %9λ%1-calculus 
 %6does%1 possess referential transparency. Referential transparency
is not simply a worthwhile theoretical property; its corollary, static binding,
is a very useful practical property. 
In programming terms, referential transparency means that to understand a 
particular program we need only understand the effect of the subprograms
rather than understand the implementation of those subprograms. This
idea is closely related to the notion of modular programming.
We will discuss some further
implications of static binding in {yonss(P260)}⊗↓There have been several
recent investigations (⊗↑[Hew#76]↑, ⊗↑[Sus#75]↑, ⊗↑[Ste#76b]↑, ⊗↑[Ste#76c]↑) 
of statically bound LISP-like languages.←.


The %7b%1-rule expresses the intent of function application.
We would then expect that a model of the %9λ%1-calculus would have
a domain consisting of functions, and would require that the %7b%1-rule be
reflected in the model as function application. The equality preserving properties
of the %7a%1 and %7b%1 rules
would require that 
if %df(a)#=#%dg(a)%1 in the model, then any %7a%1 or %7b%1 manipulations 
of  those expressions will not affect that equality. Note that we are
modelling %9≡%1 as %d=%1.


We are now in a position to relate binding  strategies with the ideas
of substitution and %7b%1 reduction. Static binding is an implementation
of the ideas expressed in the %7b%1 rule. We can implement the notions
using %3subst%9'%1 and
do explicit substitutions, or we can simulate the substitutions
using a symbol table device as LISP does. No problems should arise
if we use %3subst%9'%1; however this solution is not terribly efficient.
Particular care is needed if %3subst%9'%1 is to be simulated. The difficulty
arises in adequately modelling the substitution of values for  variables
which are free in functional arguments or functional values.
From LISP, we know one solution to this problem: use the %3function%1
construct. We could simulate the effect of %3subst%9'%1 by using a LISP
symbol table and requiring that every functional argument or value be
%3funarg%1-ed⊗↓Of course, such a claim should be proved.←.

From this standpoint, dynamic binding is simply an incorrect implementation
of substitution. Attempts to legitimize dynamic binding by
supplying formal rules lead to difficulties. 
The simple properties expressed in %3subst%9'%1 and
the %7b%1#rule do not hold for dynamic binding.
However, dynamic binding is a very useful programming tool. Its
ability to postpone bindings is particularly useful in
interactive programming environments where we are creating
program modules incrementally. The final word on binding strategies
has not been heard.

So far 
the  discussion has concentrated on binding strategies.
 We now  wish to discuss the implications of calling style.
We have discussed two  calling styles: call-by-value and call-by-name;
these computational devices must have  interpretations in mathematics. 
The conversion rules contain no instructions for their  order of application.
If the hypotheses for a rule are satisfied, then it may be applied.
 In the reduction of
a %9λ%1-expression there may be several reductions possible at any one time.
This is as it should be; we are extracting the procedural aspects, and certainly
calling style is procedural. The order of application of rules expresses
a particular calling  algorithm.
If we design a %9λ%1-calculus machine, we might  specify a preferred
order,  but the machine reflects "%6procedure%1"; the calculus 
reflects "%6function%1".

An interpretation of the conversion rules as rules of computation
requires the establishment of a notion of what is to be computed.
The conversion rules simply state equivalences between expressions; however
 the %7b%1#rule can be applied in a manner analogous to LISP's
λ-binding. That is,  it can be used to replace an application with the appropriately
substituted body. In this context the %7b%1#rule is called a %2⊗→reduction rule↔←%1;
and the application of the rule is called a reduction step.
There are two common strategies for choosing a reduction step:
applicative order and normal order.

Applicative order reduces the right#most expression;
normal order reduces the left#most expression.
We will say that a  %9λ%*-expression is in 
%2normal form%* if
it contains no expression reducible by the %7b%1#rule.
Not all expressions have normal forms: let %7D%1 name 
%9λ%d((x)#x(x))%1; then  %7D%d(%7D%d)%1 does not have a normal form.
Every %7b%1 transformation of %7D%1 produces a new expression which is
also %7b%1 reducible.
Not all reduction sequences yield a normal form: 
when %9λ%d((x)#y)(%7D%d)%1 is  reduced in normal order, a normal form
results; whereas
applicative order will not yield a normal form.
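.GROUP;
.FP
For example, with %7D%1 as above, the two strategies behave quite differently:
.BEGIN CENTERIT;
.BOXA
←%1normal order:###%9λ%d((x) y)(%7D%d)%1 → %dy%1
←%1applicative order:###%9λ%d((x) y)(%7D%d)%1 → %9λ%d((x) y)(%7D%d)%1 → ...
.BOXB
.END
.APART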

The  application of the reduction rules 
 can be considered to be a
computation scheme. The process of reducing an expression is the computation,
and a computation terminates if that reduction produces a normal form.
With this interpretation, some computations terminate  and
some don't. An expression has a normal form just 
in the case  that some reduction sequence 
terminates. 
A   %9λ%1-calculus machine (⊗↑[Lan#64]↑) 
 can simulate the reduction rules in several ways 
since no  order of application is specified by the rules.
Also,  a machine might perform the substitutions directly
 or might simulate the substitutions in a manner similar to LISP.
Finally we note the close relationships between  reduction orders and
calling styles:  applicative order is most naturally associated with call-by-value,
and call-by-name is reflected in normal order reduction.

These discussions suggest some interesting problems.
First, 
there may well be two or more sequences of reductions for a %9λ%*-expression;
assuming they both terminate,
 will they yield the same normal form?  
In fact,
if both reduction sequences terminate then they
result in the same normal form. This is  called the Church-Rosser
theorem (⊗↑[Mil#73]↑, ⊗↑[Wad#74a]↑, ⊗↑[Leh#73]↑).

Second, if we have two %9λ%*-expressions which reduce
to distinct normal forms, are they
"inherently different" %9λ%*-expressions?
This  requires some explanation of "inherently different".
We might say that by definition, expressions with different
normal forms are "inherently different". 
But  thinking of %9λ%*-expressions as functions,
to  say two %9λ%*-expressions are "different" is to say we can exhibit arguments
such that the value of application of one function is
not equal to the value of the other function application.
 C.#Boehm has established this for %9λ%*-expressions
which have normal forms (⊗↑[Wad#71]↑).

The situation changes when we examine
%9λ%*-expressions which do   not have normal forms.
Recalling the intuitive  relationship between non-termination and "no normal form",
we might expect that
all  expressions without normal form  are "equivalent". However, it can be shown
that such an identification would lead to contradictions.
We might also expect
 that %9λ%1 expressions without normal forms are "different" from
expressions which do have normal forms.   This is  also 
not true;  ⊗↑[Wad#71]↑ exhibits
two expressions, %dI%1 and %dJ%1, with and without normal form, respectively.
These two expressions are the "same" in the sense that
3 and 2.99999#... are the "same"; %dJ%1 is the limit of a sequence
of approximations to %dI%1.
Also, we can exhibit two  %9λ%*-expressions,
 %dY%41%1 and %dY%42%1, both without normal form, 
 which are distinct
in that no reduction sequence will reduce one to the other; but 
 they are "the same function" in the sense that, for any
argument, %da%* we supply,  both reductions result in the same expression.
That is, %dY%41%d(a)#=#Y%42%d(a)%1⊗↓Note that 
%df(a)%1  may have a normal form even though
 %df%1 does not.←.


The reduction rules of the %9λ%*-calculus cannot help us.
The resolution of  the idea of "same-ness" requires stronger analysis⊗↓The 
interpretation of "same-ness" which we have been using is  that of extensional
equality. That is, two functions are considered the same function if
no differences can be detected under application of the functions to any 
arguments.←.
We can
give an interpretation to the %9λ%*-calculus such that in 
that interpretation the 
pairs %dI%1 and %dJ%1, or %dY%41%1 and %dY%42%1,
have the same meaning. This should be a convincing
argument for maintaining that they are the "same function" even though
the reduction rules are %6not%* sufficient to display that equivalence 
⊗↓This demonstration also gives credence to the position that the
meaning  transcends the reduction rules.  Compare the incompleteness results
of K. Gödel (⊗↑[Men#64]↑).←.
 D. Scott %6has%* exhibited a model or interpretation of
the %9λ%*-calculus, and D. Park has shown the equivalence in this model
of %dY%41%1 and %dY%42%1, and C.#Wadsworth has shown the equivalence of
%dI%1 and %dJ%1.

As we said at the beginning of the section, this calculus was 
intended to explicate the idea of "function" and "function application".
There is a reasonably subtle distinction between Church's conception
of a function as a rule of correspondence, and the usual set-theoretic
view of a function as a set of ordered pairs.
In the latter setting we rather naturally think of the elements of the domain
and range of a function as existing %6prior%* to the specification of the function:
.BEGIN TURN OFF "{","}";center;
.BOXA
"Let %df%* be the function %d{<x,1>, <y,2>, ...}%1".
.BOXB
.END
.FP
When we think of a function   given  as a predetermined set of ordered pairs
 we do not expect functions which can take
themselves as arguments like %df(f)%*. Such functions are called %2⊗→self-applicative↔←%*.
Several languages, including LISP, allow the procedural analog of
self applicative functions. They are an instance  of functional arguments#({yonss(P76)}).
The LISP function discussed in the problem on {yon(P99)} is an instance
of a self-applicative LISP function.
Since we can deal with self-application as a procedural concept
at least,    perhaps some of this  understanding will help
with the mathematical questions. 

The calculus is an appropriate tool for studying self-application since
any %9λ%1-expression may be applied to any %9λ%1-expression, including 
itself⊗↓There are  logical difficulties,
like Russell's paradox, which arise if we allow
unrestricted self-application.←.
Compare this
with the condition in LISP when we think of the S-expression
representation of the language as the language itself. Then the distinction
between program  and data disappears, just as it does in the %9λ%1-calculus.  

.GROUP;
For example, in the %6programming language%1 LISP,
we can evaluate expressions like:

.BEGIN CENTER;
.BOXA
%3((LAMBDA (X) %9x%3) (LAMBDA (X) %9x%3))%1.
.BOXB
.END
.APART
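.GROUP;
.FP
For instance, taking %9x%1 to be %3(QUOTE DONE)%1, the form
%3((LAMBDA#(X)#(QUOTE#DONE))#(LAMBDA#(X)#(QUOTE#DONE)))%1 applies the λ-expression
to a copy of itself and evaluates to %3DONE%1; taking %9x%1 to be %3(X X)%1 gives
the procedural analog of %7D%d(%7D%d)%1, and the evaluation never terminates.
.APART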

As we move again from procedural notions to  more denotational
concepts,
we should remark that
denotational interpretations  have been introduced before.
When we said ({yon(P86)}) that:

.BEGIN CENTERIT;SELECT 3;
.P285:
.BOXA
←f[a%41%*; ...; a%4n%*] = eval[(F A%41%* ... A%4n%*)],
.BOXB
.END;
.FP
we were
relating a denotational notion with an operational
notion. The left hand side of this equation
is denotational; it expresses a functional relationship. 
The right hand side is operational, expressing a procedure call.
A proper mathematical theory should allow us to state such an equation
precisely and  should  contain methods which allow  us to demonstrate 
the correctness of the assertion.  Recent research (⊗↑[Sco#70]↑, ⊗↑[Sco#73]↑,
⊗↑[Wad#71]↑, ⊗↑[Gor#73]↑, ⊗↑[Gor#75]↑) has begun to develop such mathematical
methods.

This denotational-operational distinction is appropriate in a more general context.
When we are presented with someone's program and asked "what does it compute?"
we usually begin our investigation operationally, discovering "what does it 
%6do%*?"⊗↓Another 
common manifestation of this "denotation" phenomonon is the common
programmer complaint: "It's easier to write your own than to understand
someone else's."←.
Frequently, by tracing its execution, we can discover a denotational  description:
E.g.,#"ah!#it#computes#the#square#root".
.END

When %3great mother%* was presented, it was given as an operational exercise,
with the primary intent of introducing the LISP evaluation process without
involving  complicated names and concepts. Forms involving %3great mother%* were 
evaluated perhaps without understanding the denotation, but if asked "what does
%3great mother%* do?" you would be hard pressed to give  a comprehensible
purely operational definition. However, you might have discovered the intended
nature of %3great mother%* yourself; then it would be relatively easy to describe
its (her) behavior. Indeed, once the denotation of %3great mother%* has been discovered
questions like "What is the value of %3tgmoaf[(CAR#(QUOTE#(A#.#B)))]%*? " 
are usually
answered by using the denotation of %3tgmoaf%*: "what is 
the value of %3car[(A#.#B)]%*?"


In discussing models for LISP,
we will parallel  our development of interpreters for LISP since each subset,
%3tgmoaf, tgmoafr%*, and %3eval%*,
will also introduce new problems for our mathematical semantics.
Our first LISP subset considers functions, composition, and constants.
Constants will be elements of our domain of interpretation.
That domain will include
the S-expressions since %6most%* LISP expressions denote S-exprs; since
 many of our LISP functions are partial functions, 
it is convenient to talk about the undefined value, %9B%1. We
wish to extend our partial functions  to be %2total%* functions on
an extended domain.
As before ({yon(P251)}),
we shall call this extended domain %dS%1.
.BEGIN CENTER;TURN OFF "}","{";
.BOXA
%dS%* =  %d<sexpr>%1#∪#{%9B%1}.
.BOXB
.END

Before we can discuss the  properties of
mathematical functions denoted by LISP functions,
we must give more careful study to the nature of domains.
Our primitive domain 
is  %d<atom>%*.
Its intuitive structure is quite simple, basically just a set of atoms
or names with no inherent relationships  among them.
Another primitive domain  is %dTr%1, the domain of truth values.
The domain %d<sexpr>%*   is more complex; it is  a set of elements, but many
elements are  related. 
In our discussion of %d<sexpr>%1  on {yon(P47)}
we made  it clear that there is more than syntax involved.
We could say that
for %ds%41%1 and %ds%42%1 in %d<sexpr>%*  the essence of "dotted pair"
is contained in the concept of the set-theoretic ordered pair, 
<%ds%41%1,%ds%42%1>. Thus the "meaning" of  the set of dotted pairs is
captured by Cartesian product, %d<sexpr>#%dx%*#<sexpr>%1.

.GROUP;
Let's continue the analysis of:
.BOXA
.BEGIN CENTERIT;
←<sexpr> ::= <atom> | (<sexpr> . <sexpr>).
.BOXB
.END
.APART
.FP
We are trying to interpret this BNF equation as a definition of the
domain %d<sexpr>%*. Reasonable interpretations of "::=" and "|" appear to
be equality and set-theoretic union, respectively. This results in the
equation:

.BEGIN CENTERIT;SELECT d;
.BOXA
←<sexpr> = <atom> %1∪%* <sexpr> %dx%* <sexpr> %1.
.BOXB
.END
.FP
This looks like an algebraic equation, and as such,
may or may not have  solutions.
This particular "domain equation" has a solution: the S-exprs.
There is a natural mapping of BNF equations onto such "domain equations",
and the solutions to the domain equations are sets satisfying 
the abstract essence of the BNF. 
The mapping process is also applicable to the language constructs.
Consider the  BNF equations for a simple applicative subset of LISP:
.BEGIN CENTERIT;
.BOXA
←<form> ::= <variable> | %3λ%*[[<variable>] <form>] | <variable>[<form>]
.BOXB
.END
.FP
We would like to describe the denotations of these equations in a
style similar to that used for <sexpr>'s.
The denotations of the expressions, <form>,
of applications, <variable>[<form>], and of the variables,
<variable>, are just elements of %dS%1.
Expressions of the form "%3λ%1[[<variable>]#<form>]"  denote
functions
from %dS%* to %dS%*. Write that set as
%dS%d→%dS%1. Then our domain equation is expressed:
.BOXA
.BEGIN CENTERIT;SELECT d;

←%dS%d = %dS%d→%dS %1∪ %dS%1
.BOXB
.END
.FP
This equation has
no %6interesting%1 solutions. A simple counting argument will establish that 
unless a domain %dC%* consists of a single element, the number of functions
in %dC→C%* is greater than the number of elements in %dC%*.
This does %6not%1 say that there are no models for this LISP subset;
it  says  that our interpretation of "%d→%*"
is too broad. 

What is needed is an interpretation of functionality
which will allow a solution to the above domain equation; it
should allow a natural interpretation such that the properties which
we expect functions to possess are  true in the model.
D.#Scott gave one 
interpretation of "%d→%*" for the %9λ%1-calculus, defining
 the class of "continuous functions" (⊗↑[Sco#70]↑, ⊗↑[Sco#73]↑).
This class of functions is restricted enough to satisfy the domain equation,
  but broad enough to act as the denotations of procedures in
applicative programming languages.
We will use the notation  "%d[%1D%41%1#%d→%1#D%42%d]%1" to mean "the set of
continuous functions from domain D%41%1 to domain D%42%1".
It is the continuous functions which first supplied a model
 for the %9λ%1-calculus and it is these functions which 
 supply a basis for a mathematical model of applicative LISP (⊗↑[Gor#73]↑).

.GROUP;
We can assume that the LISP primitives
denote specific continuous functions. For example, the
mathematical counterpart to the LISP function %3car%* is the mapping %dcar%* from 
%dS%* to %dS%* defined as follows:

.BEGIN TABIT2(10,20);GROUP
.BOXA
\%dcar: [%dS%d → %dS%d]%1

\\%1is %9B%* if %dt%* is atomic
\%dcar(t)\%1is %dt%41%1 if %dt%* is %d(t%41%* . t%42%*)
\\%1is %9B%* if %dt%* is %9B%*.
.BOXB
.END
.APART;
.GROUP;
.FP
A similar strategy suffices to give denotations for the other primitive LISP functions
and predicates. For example: 

.BEGIN TABIT2(10,20);GROUP
.BOXA
\%datom: [%dS%d → %dS%d]%1

\\%1is %ef%* if %dt%* is not atomic.
\%datom(t)\%1is %et%* if %dt%* is atomic.
\\%1is %9B%* if %dt%* is %9B%*.
.BOXB
.END
.FP
Notice   that these  functions are strict: %df(%9B%d) = %9B%1.
.APART;



.GROUP;
Corresponding to %3tgmoaf%*, we will have a function, %9D%4tg%1, 
which maps expressions
onto their denotations.
Since  %9D%4tg%1 is  another mapping like  %9r%1, we will
use the  "%f(%1" and "%f)%1" brackets to enclose LISP constructs.
We need to introduce some notation for
elements of the sets <sexpr> and <form>. 


Let %5s%* 
be a meta-variable ranging over <sexpr>
and let %5e%* range over <form>; then
we can write:

.BEGIN centerit;GROUP;
.BOXA
←%9D%4tg%f(%5s%f)%1 = %ds%*
←%9D%4tg%f(%3car[%5e%*]%f)%1 = %dcar(%9D%4tg%f(%5e%f)%d)
.BOXB
.END
.FP
with similar entries for %3cdr, cons, eq, %1and %*atom%*.
The structure of this definition is very similar to that of %3tgmoaf%1.
.APART;
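
.GROUP;
.FP
For example, applying these equations repeatedly:
.BEGIN CENTERIT;GROUP;
.BOXA
←%9D%4tg%f(%3car[cdr[(A B)]]%f)%1 = %dcar(cdr((A B))) = car((B)) = B%1
.BOXB
.END
.APART;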

Now we continue to the next subset of LISP, adding conditional
expressions. As we noted on {yon(P88)}, a degree of care must
be taken when we attempt to interpret conditional expressions in terms of mappings.
We can simplify the problem slightly: it is easy to show that the
 conditional expression can be formulated in terms of the  simple 
%3if%1-expression:
%3if[p%41%*;e%41%*;e%42%*]%1.
We will display a denotation for such %3if%1-expressions. It
will be a mathematical function, and therefore the evaluation order will have been
abstracted out⊗↓%1Recall the comment of Wadsworth
({yon(P91)}). Indeed, the use of conditional expressions in the more
abstract representations of LISP functions frequently is such that 
exactly one of the p%4i%*'s is %et%* and all the others are %ef%*.
Thus in this setting, the order of evaluation of the predicates is useful 
for "efficiency" but not necessary to maintain the sense of the definition.
See {yon(P101)}.←.

.BEGIN TABIT2(10,22);GROUP
Let %3if%1 denote %dif%1, where:
.BOXA
\%dif: [%dTr%*x%dS%*x%dS%* → %dS%*]

\\%1is %dy%1 if %dx%1 is %et%*
\%dif(x,y,z)\%1is %dz%1 if %dx%1 is %ef%*
\\%1is %9B%1 otherwise
.BOXB
.END
.GROUP;
.FP
This interpretation of conditional expressions is appropriate for LISP; other
interpretations of conditionals are possible. For example:

.BEGIN TABIT2(10,24);GROUP
.BOXA
\%dif%41%*: [%dTr%*x%dS%*x%dS%* → %dS%*]

\\%1is %dy%1 if %dx%1 is %et%*
\%dif%41%*(x,y,z)\%1is %dz%1 if %dx%1 is %ef%*
\\%1is %9B%1 if %dx%* is %9B%* and %dy ≠ z%*
\\%1is %dy%1 if %dx%* is %9B%* and %dy = z%*⊗↓%1Basing conditional 
expressions on %dif%41%1 would give %3[car[A]#→#1;# %et%3#→#1]%1
 value %31%1.←.
.BOXB
.END
.FP
Neither %dif%* nor %dif%41%1 is a strict mapping.
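For example, %dif(%et%d, 1, %9B%d)%1 is %d1%1 even though one of its arguments is %9B%1;
and %dif(%9B%d, 1, 1)%1 is %9B%1, while %dif%41%d(%9B%d, 1, 1)%1 is %d1%1.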


.APART

.GROUP;

To add %3if%1-expressions to %9D%4tg%1, yielding %9D%4tgr%1,
we include:
.BEGIN TABIT1(12);FILL;
.BOXA
\%9D%4tgr%f(%3if[%5e%41%3; %5e%42%3; %5e%43%3]%f)%1 =
%dif(%9D%4tgr%f(%5e%41%f)%d, %9D%4tgr%f(%5e%42%f)%d, %9D%4tgr%f(%5e%43%f)%d)%1
.BOXB
.END
.APART
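
.GROUP;
.FP
For instance:
.BEGIN TABIT1(15);FILL;TURN ON "#";
.BOXA
\%9D%4tgr%f(%3if[atom[A]; car[(A . B)]; B]%f)%1#=#%dif(atom(A), car((A . B)), B)#=#if(%et%d, A, B)#=#A%1
.BOXB
.END
.APART;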


The next consideration is the denotational description of LISP identifiers.
Identifiers name either S-exprs or LISP functions.
Thus
an identifier denotes either an element of our domain %dS%1 or a function
object.
Let %dFn%* name the set of continuous functions:#%9S%4n=1%d[%dS%8n%d#→#%dS%d]%1,
and %dId%1 be %d<identifier>%1∪%9B%1.
We know that the value of a LISP <identifier> ({yon(P66)})  depends on the 
current environment. 
Thus we might characterize  the set of environments, %denv%1, as:
.BEGIN CENTER
.BOXA
%d[%dId%1 → ##%dS%1∪%dFn%1%d]%1.
.BOXB
.END
.FP
That is, an element of %denv%* is a continuous function which maps an identifier
either onto an S-expr or onto an n-ary function from S-exprs to S-exprs.
This is the essence of the argument used in introducing %3assoc%* ({yonss(P92)}).
Note  that %3assoc[x;l]#=#%dl(x)%1 is another instance of an 
operational-denotational relationship.

.BEGIN TURN OFF "{","}";
For example,
given a LISP identifier  %3x%*  and a member of %denv%*, say 
the function %d{<x,2>,<y,4>}%*, then 
%9D%* should map %3x%* onto %d2%*. This is an intuitive way of saying
that %9D%* should map a member of <identifier> onto a %6function%*. This function
will map a member of %denv%* onto an element of %dS%*.
Introduce %5i%* as a meta-variable ranging over <identifier>;
then for %dl#%9ε%d#env%1 we have:
.BOXA
.CENTER;
%9D%f(%5i%f)%d(l)%1#=#%dl(i)%1
.BOXB
.END
.FP
The %6denotation%1 of an identifier is a function;
whereas the %6value%* of an identifier is an element of %dS%1∪%dFn%1.

The treatment of identifiers leads directly into the
denotational aspects of function application.
We shall maintain the parallels between evaluation and denotation by giving
%9D%4e%1 and %9D%4a%1 corresponding to %3eval%* and %3apply%*.
Let %5f%1  be a member of <function> and %5e%1 be a member of <form>; then
for a given element of %denv%1, %9D%4a%1 maps %5f%1 onto an element of
%dFn%1, and %9D%4e%1 maps %5e%1 onto an element of %dS%1.

.P282:
.BEGIN CENTERit;
For example: ←%9D%4a%f(%3car%f)%d(l)%1 = %dcar%1 for all %dl%1 in %denv%1.
.END
.PT2
.GROUP;
.FP
Similar equations hold for the other LISP primitive functions and predicates.
In general, then:
.BEGIN CENTER;
.BOXA
%9D%4a%f(%5f%f)%d(l)%1 = %dl(f)%*.
.BOXB
.END
.APART
To describe the evaluation of a function-call in LISP we must add
an equation to %9D%4e%1:
.BEGIN  TABIT1(15);FILL;TURN ON "#";
.BOXA
\%9D%4e%f(%5f%1[%5e%41%1,#..., %5e%4n%1]%f)%d(l)%1#=#
%9D%4a%f(%5f%f)%d(l)(%9D%4e%f(%5e%41%f)%d(l)%1,#..., %9D%4e%f(%5e%4n%f)%d(l))%1
.BOXB
.END
.FP
We must also make consistent modifications to the previous clauses of %9D%4tgr%1 to
account for environments.
For example, the value of a constant is independent of the
specific environment in which it is evaluated: 
.BEGIN TURN OFF "{","}";TURN ON "#";center;
.BOXA
%9D%4e%f(%5s%f)%d(l)%1#=#%ds%1 for all %dl%1 in %denv%1.
.BOXB
.END
.FP
We must also extend our equations to account for conditional expressions.
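
.GROUP;
.FP
As a small check that these pieces fit together, suppose %dl%1 maps the
identifier %3x%1 onto %d(A . B)%1; then:
.BEGIN TABIT1(15);FILL;TURN ON "#";
.BOXA
\%9D%4e%f(%3car[x]%f)%d(l)%1#=#%9D%4a%f(%3car%f)%d(l)(%9D%4e%f(%3x%f)%d(l))%1#=#%dcar(l(x))#=#car((A . B))#=#A%1
.BOXB
.END
.APART;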


To discuss function application
we must give a mathematical characterization of function definitions.
In this section we will
 handle λ-notation without free variables, postponing more complex
cases to {yonss(P90)}.

Assuming the only free variables in %5e%1 are among the %5v%4i%1's,
the denotation of %3λ[[%5v%41%3;#...;#%5v%4n%3] %5e%3]%1 in a specified
environment should be a function 
from %dS%8n%1 to %dS%* such that:
.BEGIN FILL;TURN ON "#";
.BOXA
%9D%4a%f(%3λ[[%5v%41%1;#...#;%5v%4n%1] %5e%1]%f)%d(l)%1#=#
%d%9λ%d((x%41%*,#...,#%dx%4n%*) %9D%4e%f(%5e%f)%d(l#:#<v%41%*,%dx%41%d>,#...,#<v%4n%*,%dx%4n%d>))%1
.BOXB
.END
.FP
where %3λ%1 is the LISP λ-notation and %9λ%* is its mathematical counterpart⊗↓We
have written a %9λ%1-expression with more than one formal parameter. This
can be justified  as an abbreviation: 
%9λ%d((x#y)#M)#=#%9λ%d((x)#%9λ%d((y)#M))%1.←,
%dv%4i%1 is the denotational counterpart of %5v%4i%1, and %d(l#:#...#)%1 means
the environment %dl%1 augmented with the explicitly given pairs.
.GROUP;
That is,
%d(l#:#<v%41%*,%dx%41%d>,#...,#<v%4n%*,%dx%4n%d>)%1 is a modification
of %dl%* such that each %dv%4i%1 is bound to  the corresponding %dx%4i%1:

.BEGIN TABS 10,30,33,36,39;TURN ON "\"; NOFILL;GROUP;
.KRK
.BOXA
\%d(l#:#<x,v>)(v%41%d)%1 is:\%dif(\v = %9B%*,
\\\%9B%d,
\\\if(\%dv%41%* = %9B%d,
\\\\%9B%d,
\\\\if(\%dv%41%* = x%d,
\\\\\%dv,
\\\\\%dl(v%41%*))))
.BOXB
.END
.APART;
.FP
In more detail:
%9λ%d((x%41%*, ... ,x%4n%*) e(x%41%*, ... ,x%4n%*)) %1is a function %df%*: %d[%dS%8n%1 → %dS%d]%1 such that:

.BEGIN TABIT2(10,28);GROUP;
.BOXA
\%df(t%41%*, ..., t%4m%*)\%1is %de(t%41%*, ... ,t%4n%*) %1if m %9≥%* n and, for every i, %dt%4i%1 %9≠%* %9B%1.⊗↓Note 
that this equation models the LISP trick of supplying too many arguments.
Other anomalies of LISP, including dynamic binding, are describable using these
techniques (⊗↑[Gor#73]↑, ⊗↑[Gor#75]↑).←
\\%1is %9B%* otherwise
.BOXB
.END
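
.GROUP;
.FP
As a worked case under these equations, consider %3λ[[x] car[x]]%1 in an
environment %dl%1:
.BEGIN TABIT1(15);FILL;TURN ON "#";
.BOXA
\%9D%4a%f(%3λ[[x] car[x]]%f)%d(l)%1#=#%9λ%d((x%41%d)#%9D%4e%f(%3car[x]%f)%d(l#:#<x,x%41%d>))%1#=#%9λ%d((x%41%d)#car(x%41%d))%1
.BOXB
.END
.APART;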
.FP
.GROUP;
Given  this basic outline, we can more accurately describe the "equation"
of {yon(P285)}:

.BEGIN CENTERIT;SELECT 3;
.BOXA
←f[a%41%*; ...; a%4n%*] = eval[(F A%41%* ... A%4n%*)],
.BOXB
.END;
Namely:
.BEGIN "YY"
.P289:
.BOXA
%9D%4e%f(%3eval[%eR%f(%5f%3[%5e%41%3; ... %5e%4n%3]%f)%3;%eR%f(%5a%f)%3]%f)%d(l%4init%d)%1#=#
%9D%4a%f(%5f%f)%d(l%4new%d)(%9D%4e%f(%5e%41%f)%d(l%4new%d), ..., %9D%4e%f(%5e%4n%f)%d(l%4new%d))%1
.BOXB
.END "YY"
.FP
where %dl%4init%1  is the initial symbol table and %dl%4new%1 is %dl%4init%1
augmented with the entries from %5a%1.

.APART;

One of the major difficulties in supplying models for applicative languages
is caused by the type-free requirement⊗↓It was not until 
1969 that a model for the %9λ%1-calculus was discovered even though
the formalism was invented in the 1930's.←. Self-application
is one indication of this. We can show that imposing a type structure on our language
will solve many of these problems. In a 
typed %9λ%1-calculus
an expression will always have a normal form#(⊗↑[Mor#68]↑). 
Computationally, this means that all programs will terminate.
Also, models for typed %9λ%1-calculus are much more readily attained#(⊗↑[Mil#73]↑).
However, the type-free calculus is a stronger system, and requiring all expressions
to have a consistent type structure rules out several useful constructs;
in particular, the %9λ%1-calculus counterpart to the LISP %3label%1 operator
cannot be consistently typed.
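
.GROUP;
.FP
The difficulty is the self-application implicit in such recursion operators: in a
term like %9λ%d((f)#f(f))%1, the variable %df%1 must act both as a function and as
an argument to that same function, and no consistent type can be assigned to it.
.APART;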

From the practical side, a typed structure is  a mixed blessing.
Language declarations are a form of typing and can be quite helpful in
pinpointing programming errors. Declarations can also be used by compilers to 
help produce optimized code. However,
 a type structure can be a real nuisance when trying to debug a program.
It is frequently desirable to  examine and 
modify the representations of abstract data structures. Those kinds of operations
imply the ability to ignore the type information.

Logically, the next addition to %7D%1 would involve recursion and function
definitions: %3label%1 and "<=".
We know that the LISP %3label%1 operator is similar to "<=", but
%3label%1 builds a temporary definition, while "<=" modifies the
global environment. 
Programming language constructs which modify the environment
are said to have %2side-effects%1 and  are instances
of what are called imperative constructs.
Since our main interest lies in the programming aspects, we will
postpone the mathematics.
The next chapter introduces the procedural aspects of 
imperative constructs, but  in {yonss(P90)}
we will investigate some of the mathematical aspects of
"<=" and %3label%1.

As a final bridge between theory and practice we
will use   LISP  to introduce 
  one of the
fundamental results in recursion theory: a proof of the non-existence
of an algorithm to determine whether or not a LISP function is total.
This is also called the unsolvability of the ⊗→halting problem↔←, since the
existence of such an algorithm would tell us whether a LISP function
would terminate for all inputs⊗↓Again, we use
"LISP function" as a synonym for  "algorithm". To complete
the  halting argument  we must show that every algorithm is expressible in LISP.←.
That algorithm  does not exist⊗↓The
 argument is adapted from ⊗↑[Lev#un]↑.←.

.GROUP;
.P288:
The proof depends on our knowledge about the function %3apply%*. The fundamental
relationship is:

.BEGIN INDENT 10,10;
.BOXA
For a function %3f%* and arguments %3a%41%1,#...#,%3a%4n%1 we know
that if 
%3f[a%41%3;#...#;a%4n%3]%1 is defined then 
.BEGIN CENTER;
.PT2;
%3f[a%41%3;#...;a%4n%3]%1 =  %3apply[%eR%f(%3f%f)%3;list[%eR%f(%3a%41%f)%3;#...#;%eR%f(%3a%4n%f)%3];env]%1
.BOXB
.END
.APART;
.END
.FP
Compare this  equation  with the equation on {yon(P289)}.
This property of %3apply%* makes it a %2⊗→universal function↔←%* for
LISP in the sense that if %3apply%* is given an encoding of a function %3f%*, of
some arguments to be applied, and an environment which contains 
the definition of %3f%* and all the
necessary subsidiary definitions needed by %3f%*, then %3apply%*
can simulate the behavior of %3f%*. 

We will assume that the representation of %3env%* is  the standard
a-list of dotted pairs: representation of name dotted with representation 
of λ-expression. Given a function named %3g%*, together with its λ-definition,
we will designate the S-expr representation  of the dotted pair as %3g%8R%1.

.GROUP;
For example, given
.BOXA
.BEGIN CENTER;
%3fact <= λ[[x][x = 0 → 1;%et%* → *[x;fact[x-1]]]];%1
.END
.BOXB
.FP
Then %3fact%8R%1 is:
.BOXA
.BEGIN GROUP;SELECT 3; TABIT3(22,32,40);
(FACT . (LAMBDA\(X) 
\(COND\((ZEROP X) 1)
\\(T (TIMES\X 
\\\(FACT (SUB1 X))))))).%1
.END
.BOXB
.APART;

.GROUP;
.FP
Next,
if %3f%* refers to %3f%41%1 through %3f%4n%1 in its evaluation,
we will represent the environment as:
.BEGIN CENTER; SELECT 3;
.BOXA
list[f%8R%3;f%41%8R%3;...#;f%4n%8R%3]
.BOXB
.END
.APART;
.FP;
Finally, we will identify such an environment structure as the representation of the
definition of the %6first%1 function in that environment. For example,
a complete definition of %3fact%1 would  be an environment beginning 
with %3fact%8R%1,
followed by %3zerop%8R%1, %3times%8R%1, or %3sub1%8R%1 if any
of those functions were not considered primitive.
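
.GROUP;
.FP
For instance, if only %3times%1 were considered non-primitive, the definition
of %3fact%1 would be represented by %3list[fact%8R%3;#times%8R%3]%1, a two-element
list of dotted pairs.
.APART;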

.GROUP;
Now assume the existence of a unary predicate %3total%* such that:
.BEGIN TABIT1(10);
.BOXA
\gives %et%* if %3x%* is a representation of a total unary function⊗↓This discussion 
will nominally concern unary functions, but the generalization to
n-ary functions is immediate.←.
%3total[x]%1  
\gives %ef%1 in all other cases.
.BOXB
.END
.APART;
.FP
Notice that if %3total[list[f%8R%3;#...]]%1 is true, then for arbitrary %3a%*
we will have %3apply[name[f%8R%3];list[a];#list[f%8R%3;#...]]%1 terminating and
giving value %3f[a]%*.
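
.GROUP;
.FP
To make that concrete: if %3total[list[fact%8R%3;#...]]%1 were true, then
%3apply[FACT;#list[2];#list[fact%8R%3;#...]]%1 would terminate with value %32%1,
since %3fact[2]#=#2%1.
.APART;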

.GROUP;
Now we define a function:
.BEGIN SELECT 3;CENTER;
.BOXA
diag <= λ[[x][total[x] → list[apply[name[first[x]];list[x];x]]; %et%* → %ef%*]].
.BOXB
.END
.FP
Note that %3diag%1  is total. Now consider %3diag%8R%1:
.BEGIN SELECT 3;tabit3(22,30,56);group;
.BOXA
(DIAG . (LAMBDA\(X)
\(COND\((TOTAL X) (LIST (APPLY\(NAME (FIRST X))
\\\(LIST X) 
\\\X)))
\\(T NIL))) )
.BOXB
.END
.APART;
Finally, form  %3list[diag%8R%3; total%8R%3; apply%8R%3; ...]%1. 
Call the resulting list %3denv%1. That list
 will be the representation of %3diag%* and all its necessary
functions.

.GROUP;
Now consider the evaluation of %3diag[denv]%*. 
Since %3diag %6is%1 total, %3total[denv]%* is
true, and we can reduce the problem to:
.BEGIN CENTER; SELECT 3;
.BOXA
list[apply[name[first[denv]];list[denv];denv]]
.BOXB
.END
.FP
but %3name[first[denv]] = DIAG%1; and therefore the call on %3apply%* reduces
to 
%3apply[DIAG;list[denv];denv]%*. But that's just %3diag[denv]%*, and so we have
shown %3diag[denv]#=#list[diag[denv]]%1. Since %3diag%1 is total, %3diag[denv]%1 is a
well-defined S-expression, and no S-expression can equal a list containing itself.
Thus the assumption of the existence of %3total%* must be in error.
.APART;

The usual proof of this result is given in number theory; it
involves encoding the functions into the integers
and then expressing  the equivalent of the %3apply%* function as an algorithm
within that theory. 
The  encoding into the integers is analogous to what %6we%* did
in encoding into S-expressions; it 
is the  problem of
representation again.
 LISP %6is%* more than a programming language; it is also a formalism for discussing
computation.


.CENT(Problems)
.GROUP;
.P271:
.NL
1.##Recall the problem on {yon(P99)}, dealing with the following
factorial algorithm:
.BEGIN CENTERIT;SELECT 3;GROUP;
.PT2
←fact <= λ[[n] f[function[f]; n]]
.PT2
.ONCE INDENT 4,4;
%1where:%3←f <= λ[[g;n][n=0 → 1; %et%* → *[n; g[g; n-1]] ]]
.pt2
.END
.ONCE INDENT 4,4;
Rewrite %3fact%1 in terms of a unary function %7t%1:
.BEGIN CENTER;
.pt2
%7t%3 <= λ[[x] function[λ[[n][n=0 → 1; %et%* → *[n; x[n-1]] ]]]]%1.
.END
.pt2
Show that %3fact%1 = %7t%3[fact]%1.
.APART

.BEGIN GROUP;
.NL
2.##The language  described by the %7a%1 and %7b%1 rules 
doesn't look particularly rich; it seems comparable in power to LISP
with just function application but without conditional expressions.
That is only an illusion. Show that we can represent a simple
%3if%1 function %3if[p;then;otherwise]%1.
Hint: show that
%9λ%d((x#y)#y)%1 is a good representation for %ef%1 and
%9λ%d((x#y)#x)%1 is a good representation for %et%1.

.END